LIDA (cognitive architecture)
The LIDA (Learning Intelligent Distribution Agent) cognitive architecture attempts to model a broad spectrum of cognition in biological systems, from low-level perception/action to high-level reasoning. Developed primarily by Stan Franklin and colleagues at the University of Memphis, the LIDA architecture is empirically grounded in cognitive science and cognitive neuroscience. It extends the earlier IDA architecture by adding mechanisms for learning.[1] In addition to providing hypotheses to guide further research, the architecture can support control structures for software agents and robots. Providing plausible explanations for many cognitive processes, the LIDA conceptual model is also intended as a tool with which to think about how minds work.
Two hypotheses underlie the LIDA architecture and its corresponding conceptual model: 1) Much of human cognition functions by means of frequently iterated (~10 Hz) interactions, called cognitive cycles, between conscious contents, the various memory systems, and action selection. 2) These cognitive cycles serve as the "atoms" of cognition of which higher-level cognitive processes are composed.
Overview
Though it is neither symbolic nor strictly connectionist, LIDA is a hybrid architecture in that it employs a variety of computational mechanisms, chosen for their psychological plausibility. The LIDA cognitive cycle is composed of modules and processes employing these mechanisms.
Computational mechanisms
The LIDA architecture employs several computational mechanisms,[2] including variants of the Copycat Architecture,[3][4] sparse distributed memory,[5][6] the schema mechanism,[7][8] the Behavior Net,[9][10] and the subsumption architecture.[11]
Psychological and neurobiological underpinnings
As a comprehensive conceptual and computational cognitive architecture, LIDA is intended to model a large portion of human cognition.[12][13] Comprising a broad array of cognitive modules and processes, it attempts to implement and flesh out a number of psychological and neuropsychological theories, including Global Workspace Theory,[14] situated cognition,[15] perceptual symbol systems,[16] working memory,[17] memory by affordances,[18] long-term working memory,[19] and the H-CogAff architecture.[20]
Codelets
LIDA relies heavily on what Franklin calls codelets. A codelet is a "special purpose, relatively independent, mini-agent typically implemented as a small piece of code running as a separate thread."[21]
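The quoted description suggests a natural realization. The following is a minimal Python sketch, assuming illustrative names and a simple polling loop; it is not drawn from the LIDA software framework. It shows a codelet as a small, relatively independent mini-agent running in its own thread and watching a shared workspace:

```python
import threading
import time

class Codelet(threading.Thread):
    """Illustrative mini-agent: a small piece of code running in its own
    thread, repeatedly checking a shared workspace for content it cares
    about and acting on it when found."""

    def __init__(self, name, workspace, trigger, action, interval=0.1):
        super().__init__(name=name, daemon=True)
        self.workspace = workspace  # shared structure the codelet observes
        self.trigger = trigger      # predicate: is there anything to react to?
        self.action = action        # what the codelet contributes when triggered
        self.interval = interval    # polling period in seconds

    def run(self):
        while True:
            if self.trigger(self.workspace):
                self.action(self.workspace)
            time.sleep(self.interval)

# Hypothetical example: an attention-like codelet that flags novel items.
workspace = {"items": [{"label": "red ball", "novel": True}], "flags": []}

def flag_novel_items(ws):
    for item in ws["items"]:
        if item["novel"]:
            item["novel"] = False
            ws["flags"].append(f"attend to {item['label']}")

Codelet(
    name="novelty-codelet",
    workspace=workspace,
    trigger=lambda ws: any(item["novel"] for item in ws["items"]),
    action=flag_novel_items,
).start()

time.sleep(0.3)
print(workspace["flags"])  # ['attend to red ball']
```

In the architecture, many such codelets run in parallel, each specialized for one small job, and their combined activity produces the higher-level behavior of the cognitive cycle.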
Cognitive cycle
The LIDA cognitive cycle can be subdivided into three phases: understanding, consciousness, and action selection (which includes learning).[2]
In the understanding phase, incoming stimuli activate low-level feature detectors in Sensory Memory. The output engages Perceptual Associative Memory, where higher-level feature detectors recognize more abstract entities such as objects, categories, actions, and events. The resulting percept moves to the Workspace, where it cues both Transient Episodic Memory and Declarative Memory, producing local associations. These local associations are combined with the percept to generate a current situational model, which is the agent's understanding of what is going on right now.[2]
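As a rough illustration of this data flow (the names below are assumptions for exposition, not the LIDA Framework's API), a percept can be used to cue two memory stores, and the returned local associations combined with it into a situational model:

```python
from dataclasses import dataclass

@dataclass
class Percept:
    entities: set  # abstract entities recognized by perceptual associative memory

@dataclass
class MemoryStore:
    """Stands in for Transient Episodic Memory or Declarative Memory."""
    associations: dict  # entity -> items remembered together with it

    def cue(self, percept):
        """Return the local associations evoked by the percept."""
        related = set()
        for entity in percept.entities:
            related.update(self.associations.get(entity, []))
        return related

def build_situational_model(percept, *stores):
    """Combine the percept with local associations from each memory store."""
    model = set(percept.entities)
    for store in stores:
        model.update(store.cue(percept))
    return model

# Hypothetical example
episodic = MemoryStore({"dog": ["park yesterday"]})
declarative = MemoryStore({"dog": ["mammal", "can bite"]})
percept = Percept({"dog", "leash"})
print(build_situational_model(percept, episodic, declarative))
```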
In the consciousness phase, "attention codelets" form coalitions by selecting portions of the situational model and moving them to the Global Workspace. These coalitions then compete for attention. The winning coalition becomes the content of consciousness and is broadcast globally.[2]
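A minimal sketch of this competition step follows, under the simplifying assumption that each coalition carries a single activation value summarizing its salience; the names are illustrative rather than LIDA's actual data structures:

```python
from dataclasses import dataclass

@dataclass
class Coalition:
    contents: list     # portions of the situational model selected by an attention codelet
    activation: float  # salience assigned by the forming codelet

def global_workspace_competition(coalitions):
    """Select the most active coalition; its contents become the conscious broadcast."""
    if not coalitions:
        return None
    winner = max(coalitions, key=lambda c: c.activation)
    return winner.contents  # broadcast globally to all modules

# Hypothetical example
broadcast = global_workspace_competition([
    Coalition(contents=["dog", "approaching"], activation=0.9),
    Coalition(contents=["leash", "on ground"], activation=0.4),
])
print(broadcast)  # ['dog', 'approaching']
```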
The global broadcast of these conscious contents initiates the learning and action selection phase. As the broadcast reaches the various forms of memory (perceptual, episodic, and procedural), new entities and associations are learned and old ones reinforced. In parallel with this learning, and using the conscious contents, possible action schemes are instantiated from Procedural Memory and sent to Action Selection, where they compete to be the behavior selected for this cognitive cycle. The selected behavior triggers Sensory-Motor Memory to produce a suitable algorithm for its execution, which completes the cognitive cycle.[2]
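The sketch below illustrates, under simplified assumptions, how schemes from Procedural Memory might be instantiated against the conscious broadcast and then compete in action selection. The overlap-based scoring rule and all names are hypothetical; LIDA itself uses a behavior net for this step, as noted under computational mechanisms above.

```python
from dataclasses import dataclass

@dataclass
class Scheme:
    """A template in Procedural Memory: context, an action, and a base activation."""
    context: set
    action: str
    base_activation: float

def instantiate_schemes(procedural_memory, conscious_contents):
    """Instantiate schemes whose context overlaps the conscious broadcast."""
    candidates = []
    for scheme in procedural_memory:
        overlap = len(scheme.context & conscious_contents)
        if overlap:
            candidates.append((scheme, scheme.base_activation * overlap))
    return candidates

def select_behavior(candidates):
    """Action selection: the most active instantiated scheme wins this cycle."""
    if not candidates:
        return None
    scheme, _ = max(candidates, key=lambda pair: pair[1])
    return scheme.action

# Hypothetical example
procedural_memory = [
    Scheme(context={"dog", "approaching"}, action="step back", base_activation=0.7),
    Scheme(context={"leash"}, action="pick up leash", base_activation=0.5),
]
conscious = {"dog", "approaching"}
print(select_behavior(instantiate_schemes(procedural_memory, conscious)))  # step back
```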
This process repeats continuously, with each cycle representing a cognitive "moment" that contributes to higher-level cognitive processes.[2]
History
Virtual Mattie (V-Mattie) is a software agent[22] that gathers information from seminar organizers, composes announcements of next week's seminars, and mails them each week to a list that it keeps updated, all without the supervision of a human.[23] V-Mattie employed many of the computational mechanisms mentioned above.
Baars' Global Workspace Theory (GWT) inspired the transformation of V-Mattie into Conscious Mattie, a software agent with the same domain and tasks whose architecture included a consciousness mechanism à la GWT. Conscious Mattie was the first functionally, though not phenomenally, conscious software agent. Conscious Mattie gave rise to IDA.
IDA (Intelligent Distribution Agent) was developed for the US Navy[24][25][26] to fulfill tasks performed by human resource personnel called detailers. At the end of each sailor's tour of duty, he or she is assigned to a new billet. This assignment process is called distribution. The Navy employs almost 300 full-time detailers to effect these new assignments. IDA's task is to facilitate this process by automating the role of detailer. IDA was tested by former detailers and accepted by the Navy. Various Navy agencies supported the IDA project with approximately $1,500,000 in funding.
The LIDA (Learning IDA) architecture was originally spawned from IDA by the addition of several styles and modes of learning,[27][28][29] but has since grown into a much larger and more generic software framework.[30][31]
Footnotes
[edit]- ^ "Projects: IDA and LIDA". University of Memphis. Retrieved 2024-08-25.
- ^ a b c d e f Baars, Bernard J.; Franklin, Stan (2009). "Consciousness is computational: The LIDA model of global workspace theory". International Journal of Machine Consciousness, 1: 23–32. doi:10.1142/s1793843009000050.
- ^ Hofstadter, D. (1995). Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. New York: Basic Books.
- ^ Marshall, J. (2002). Metacat: A self-watching cognitive architecture for analogy-making. In W. D. Gray & C. D. Schunn (eds.), Proceedings of the 24th Annual Conference of the Cognitive Science Society, pp. 631-636. Mahwah, NJ: Lawrence Erlbaum Associates
- ^ Kanerva, P. (1988). Sparse Distributed Memory. Cambridge MA: The MIT Press
- ^ Rao, R. P. N., & Fuentes, O. (1998). Hierarchical Learning of Navigational Behaviors in an Autonomous Robot using a Predictive Sparse Distributed Memory Archived 2017-08-10 at the Wayback Machine. Machine Learning, 31, 87-113
- ^ Drescher, G.L. (1991). Made-up minds: A Constructivist Approach to Artificial Intelligence
- ^ Chaput, H. H., Kuipers, B., & Miikkulainen, R. (2003). Constructivist Learning: A Neural Implementation of the Schema Mechanism. Paper presented at the Proceedings of WSOM '03: Workshop for Self-Organizing Maps, Kitakyushu, Japan
- ^ Maes, P. 1989. How to do the right thing. Connection Science 1:291-323
- ^ Tyrrell, T. (1994). An Evaluation of Maes's Bottom-Up Mechanism for Behavior Selection. Adaptive Behavior, 2, 307-348
- ^ Brooks, R. A. (1991). Intelligence without Representation. Artificial Intelligence. Elsevier
- ^ Franklin, S., & Patterson, F. G. J. (2006). The LIDA Architecture: Adding New Modes of Learning to an Intelligent, Autonomous, Software Agent IDPT-2006 Proceedings (Integrated Design and Process Technology): Society for Design and Process Science
- ^ Franklin, S., Ramamurthy, U., D'Mello, S., McCauley, L., Negatu, A., Silva R., & Datla, V. (2007). LIDA: A computational model of global workspace theory and developmental learning. In AAAI Fall Symposium on AI and Consciousness: Theoretical Foundations and Current Approaches. Arlington, VA: AAAI
- ^ Baars, B. J. (1988). A cognitive theory of consciousness. Cambridge: Cambridge University Press
- ^ Varela, F. J., Thompson, E., & Rosch, E. (1991). The Embodied Mind. Cambridge, Massachusetts: MIT Press
- ^ Barsalou, L. W. 1999. Perceptual symbol systems. Behavioral and Brain Sciences 22:577–609. MA: The MIT Press
- ^ Baddeley, A. D., & Hitch, G. J. (1974). Working memory. In G. A. Bower (Ed.), The Psychology of Learning and Motivation (pp. 47–89). New York: Academic Press
- ^ Glenberg, A. M. 1997. What memory is for. Behavioral and Brain Sciences 20:1–19
- ^ Ericsson, K. A., and W. Kintsch. 1995. Long-term working memory. Psychological Review 102:211–245
- ^ Sloman, A. 1999. What Sort of Architecture is Required for a Human-like Agent? In Foundations of Rational Agency, ed. M. Wooldridge, and A. Rao. Dordrecht, Netherlands: Kluwer Academic Publishers
- ^ Franklin, Stan (January 2003). "IDA: A conscious artifact?". Journal of Consciousness Studies.
- ^ Franklin, S., & Graesser, A., 1997. Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents. Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages, published as Intelligent Agents III, Springer-Verlag, 1997, 21-35
- ^ Franklin, S., Graesser, A., Olde, B., Song, H., & Negatu, A. (1996, Nov). Virtual Mattie—an Intelligent Clerical Agent. Paper presented at the Symposium on Embodied Cognition and Action: AAAI, Cambridge, Massachusetts.
- ^ Franklin, S., Kelemen, A., & McCauley, L. (1998). IDA: A Cognitive Agent Architecture IEEE Conf on Systems, Man and Cybernetics (pp. 2646–2651 ): IEEE Press
- ^ Franklin, S. (2003). IDA: A Conscious Artifact? Journal of Consciousness Studies, 10, 47–66
- ^ Franklin, S., & McCauley, L. (2003). Interacting with IDA. In H. Hexmoor, C. Castelfranchi & R. Falcone (Eds.), Agent Autonomy (pp. 159–186 ). Dordrecht: Kluwer
- ^ D'Mello, Sidney K., Ramamurthy, U., Negatu, A., & Franklin, S. (2006). A Procedural Learning Mechanism for Novel Skill Acquisition. In T. Kovacs & James A. R. Marshall (Eds.), Proceeding of Adaptation in Artificial and Biological Systems, AISB'06 (Vol. 1, pp. 184–185). Bristol, England: Society for the Study of Artificial Intelligence and the Simulation of Behaviour
- ^ Franklin, S. (2005, March 21–23, 2005). Perceptual Memory and Learning: Recognizing, Categorizing, and Relating. Paper presented at the Symposium on Developmental Robotics: American Association for Artificial Intelligence (AAAI), Stanford University, Palo Alto CA, USA
- ^ Franklin, S., & Patterson, F. G. J. (2006). The LIDA Architecture: Adding New Modes of Learning to an Intelligent, Autonomous, Software Agent IDPT-2006 Proceedings (Integrated Design and Process Technology): Society for Design and Process Science
- ^ Franklin, S., & McCauley, L. (2004). Feelings and Emotions as Motivators and Learning Facilitators Architectures for Modeling Emotion: Cross-Disciplinary Foundations, AAAI 2004 Spring Symposium Series (Vol. Technical Report SS-04-02 pp. 48–51). Stanford University, Palo Alto, California, USA: American Association for Artificial Intelligence
- ^ Negatu, A., D'Mello, Sidney K., & Franklin, S. (2007). Cognitively Inspired Anticipation and Anticipatory Learning Mechanisms for Autonomous Agents. In M. V. Butz, O. Sigaud, G. Pezzulo & G. O. Baldassarre (Eds.), Proceedings of the Third Workshop on Anticipatory Behavior in Adaptive Learning Systems (ABiALS 2006) (pp. 108-127). Rome, Italy: Springer Verlag
External links
- LIDA architecture, Cognitive Computing Research Group, University of Memphis
- Database of possible neural correlates of LIDA modules and processes
- "How Minds Work" tutorial
- Mention of LIDA in "Bot shows signs of consciousness" by Celeste Biever, New Scientist, 1 April 2011